This is the course diary for the Introduction to Open Data Science course at University of Helsinki. This MOOC course covers statistical concepts, which will be useful to my work as a PhD student. I’m looking forward to many interesting exercises!
Click here to go to the GitHub repository for this diary.
A data set containing answers from a survey conducted among students of an introductory statistics course was examined.
There are 183 subjects, which reduces to 166 after excluding those who did not attend the final exam of the course. For each subject there are 60 features, of which 56 are answers to survey questions. The other four are gender, age, exam points, and attitude, a summary variable computed from certain survey answers.
The questions in the survey pertained to the students’ attitude and approaches to learning. Three distinct learning approaches were defined: surface level, deep level and strategic level. The 56 survey questions were condensed into these three variables (named surf, deep and stra), so in the end there are 7 features for each student.
Let’s see how the variables in the data look and compare to each other.
First, note that two thirds of the students were female and one third male. Interestingly, while the males scored higher on attitude, both genders achieved very nearly the same mean exam points. The majority of the students are around 22 years old, but students over 50 years of age also participated in the course.
As one would expect, attitude has a notable correlation with the exam points. In terms of the learning approaches, surface-level learning correlates negatively with the points, which makes sense. Likewise, a strategic learning approach tends to lead to a higher score. However, one might have expected deep-level learning to have a higher correlation with the exam points. The surface-level and deep-level approaches are opposites of each other, so they have some negative correlation.
In order to fit a linear regression model that predicts the exam points of a student, we need to choose explanatory variables from the available features. Looking at how the other features correlate with the points, natural choices for the explanatory variables are attitude, stra, and surf, as they have the largest absolute linear correlations. The first two correlate positively with the points, while the last one correlates negatively.
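As a sketch of how such a model is fitted in R (the object name my_model is an assumption; the data frame learning2014 is the one shown in the output below):

# Fit a multiple linear regression with exam points as the target
# and attitude, stra and surf as explanatory variables
my_model <- lm(points ~ attitude + stra + surf, data = learning2014)

# Print the coefficients, their significance and the R-squared values
summary(my_model)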
Fitting the model gives us:
##
## Call:
## lm(formula = points ~ attitude + stra + surf, data = learning2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -17.1550 -3.4346 0.5156 3.6401 10.8952
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.01711 3.68375 2.991 0.00322 **
## attitude 0.33952 0.05741 5.913 1.93e-08 ***
## stra 0.85313 0.54159 1.575 0.11716
## surf -0.58607 0.80138 -0.731 0.46563
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared: 0.2074, Adjusted R-squared: 0.1927
## F-statistic: 14.13 on 3 and 162 DF, p-value: 3.156e-08
So we can predict the exam points with the equation \[points = 0.34 * attitude + 0.85 * stra - 0.58 * surf + 11.01,\] where the coefficients other than that of attitude have relatively high uncertainty. The t values (and the corresponding p-values) indicate how statistically significant each parameter is, in the same relative order as the linear correlations. Most notably attitude is very significant, while surf could arguably be left out of the model. While the coefficient of the attitude variable is smaller than the coefficients of stra and surf, one must look back at the ranges of the variables: attitude ranges from 0 to 50, while the other two only range from 0 to 5, making attitude much more decisive for the prediction of the exam points.
Let us however see what we get if surf is removed and the model is fitted only to attitude and stra.
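The refit can be sketched in the same way (my_model2 is again an assumed object name):

# Refit the model without surf
my_model2 <- lm(points ~ attitude + stra, data = learning2014)
summary(my_model2)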
##
## Call:
## lm(formula = points ~ attitude + stra, data = learning2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -17.6436 -3.3113 0.5575 3.7928 10.9295
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 8.97290 2.39591 3.745 0.00025 ***
## attitude 0.34658 0.05652 6.132 6.31e-09 ***
## stra 0.91365 0.53447 1.709 0.08927 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.289 on 163 degrees of freedom
## Multiple R-squared: 0.2048, Adjusted R-squared: 0.1951
## F-statistic: 20.99 on 2 and 163 DF, p-value: 7.734e-09
Now the equation is \[points = 0.35 * attitude + 0.91 * stra + 8.97.\]
In order to compare these two models, consider the multiple R-squared values. The R-squared is a measure of how well the model fits the data. At a quick glance the first model has the higher multiple R-squared value, 0.2074, versus 0.2048 for the latter. However, it needs to be taken into account that adding variables to a model never decreases the multiple R-squared, so we should instead look at the adjusted R-squared, which accounts for the number of variables. The second model has the higher adjusted R-squared value, 0.1951, compared to 0.1927 for the first model. While the difference is not large, the second model may be a slightly more accurate predictor.
Next it is worth evaluating the validity of the model. First recall three key assumptions of linear regression: (1) the relationship between the target and the explanatory variables is linear, (2) the errors are normally distributed, and (3) the errors are not correlated with the explanatory variables and have constant variance.
It is possible to examine how well these assumptions hold with a few diagnostic plots:
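As a sketch, base R's plot method for lm objects can produce the three plots discussed below, assuming the second model is stored in my_model2 as above:

# Draw Residuals vs Fitted (1), Normal Q-Q (2) and Residuals vs Leverage (5) side by side
par(mfrow = c(1, 3))
plot(my_model2, which = c(1, 2, 5))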
The left plot shows the residuals versus the fitted values. If any sort of structure were visible in the scatter plot, the errors would be correlated with the variable values. This is also related to the linearity of the model, as a strong non-linear structure in the plot would indicate that the model should also be non-linear. There is no notable structure in this plot, which means that assumptions 1 and 3 hold fine.
The normal Q-Q plot in the middle shows whether the errors are normally distributed. Since the majority of the points in the plot follow the fitted line quite well, it can be concluded that the errors can be approximated as normally distributed, and hence assumption 2 also holds reasonably well.
Finally the plot on the right shows the residuals versus leverage. It displays how strong of an effect single data points have on the fit. There are no points in the plot with highly significant leverage, which implies that there are no notable outliers and most of the points lie quite well around the fitted model.
The data set that is examined this week contains information about 382 Portuguese students at the secondary education level. This data set can be downloaded from here (which was wrangled from two data sets available here). For each student, there are 35 attributes:
## [1] "school" "sex" "age" "address" "famsize"
## [6] "Pstatus" "Medu" "Fedu" "Mjob" "Fjob"
## [11] "reason" "nursery" "internet" "guardian" "traveltime"
## [16] "studytime" "failures" "schoolsup" "famsup" "paid"
## [21] "activities" "higher" "romantic" "famrel" "freetime"
## [26] "goout" "Dalc" "Walc" "health" "absences"
## [31] "G1" "G2" "G3" "alc_use" "high_use"
The attributes contain information about the students’ social background, demographics, activities, school performance level (combined from two subjects: Portuguese and Mathematics) and alcohol consumption. We are especially interested in how alcohol consumption relates to the other features. There are 114 out of the 382 students who are considered to have high alcohol consumption.
Before digging into the data, consider a few variables that could be related to the amount of alcohol consumption. I chose four: the number of absences (absences), study time (studytime), past class failures (failures) and the quality of family relationships (famrel). My hypothesis is that absences and failures are positively related to high alcohol consumption, while study time and family relations are negatively related to it.
To see whether there really is a relation between high alcohol consumption and the four variables I highlighted, it is best to have a look at the relevant plots.
These plots support the hypotheses I stated earlier. High alcohol consumption is associated with a higher number of absences and lower study time. There is also a relation between high alcohol consumption and past class failures: while less than one third of the students are considered to have high alcohol consumption, they make up about half of the students with one or more past class failures. Family relations are also rated lower on average among those students.
The relationship between these features and high alcohol consumption can be further examined with a logistic regression model that tries to predict whether or not a student has high alcohol consumption based on these features.
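A sketch of the fit (the object name m is an assumption; the data frame alc is the one shown in the output below):

# Fit a logistic regression model for the binary high_use variable
m <- glm(high_use ~ absences + studytime + failures + famrel,
         data = alc, family = "binomial")

# Print the coefficients and their significance
summary(m)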
##
## Call:
## glm(formula = high_use ~ absences + studytime + failures + famrel,
## family = "binomial", data = alc)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -2.1020 -0.8046 -0.6356 1.0929 2.1626
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 0.51942 0.61298 0.847 0.396787
## absences 0.07646 0.02237 3.417 0.000632 ***
## studytime -0.48275 0.15859 -3.044 0.002335 **
## failures 0.34535 0.18900 1.827 0.067659 .
## famrel -0.22547 0.12713 -1.773 0.076149 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 465.68 on 381 degrees of freedom
## Residual deviance: 426.85 on 377 degrees of freedom
## AIC: 436.85
##
## Number of Fisher Scoring iterations: 4
The coefficients of the model can be interpreted as odds ratios. Let’s check these ratios and their confidence intervals.
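A sketch of the computation, assuming the fitted model object is called m as above: the coefficients are exponentiated to obtain the odds ratios, and the same is done for the profile-likelihood confidence intervals that confint computes for a glm.

# Odds ratios and their 95% confidence intervals
odds_ratios <- exp(coef(m))
conf_int <- exp(confint(m))
cbind(odds_ratios, conf_int)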
## Waiting for profiling to be done...
## odds_ratios 2.5 % 97.5 %
## (Intercept) 1.6810604 0.5041644 5.6209355
## absences 1.0794604 1.0353405 1.1306057
## studytime 0.6170848 0.4480057 0.8355182
## failures 1.4124849 0.9753615 2.0584625
## famrel 0.7981401 0.6215738 1.0249848
If an odds ratio is larger than one, the feature is positively associated with the positive outcome (i.e. high alcohol consumption in our case). On the other hand, an odds ratio of less than one means that the feature is negatively associated with the positive outcome. The earlier hypothesis was that absences and past failures would be positively correlated with high alcohol consumption, and that study time and family relations would be negatively correlated. These odds ratios indeed support this hypothesis. However, in the case of family relations the confidence interval extends above one, so it is a less reliable predictor than the others.
Unfortunately I didn’t quite have enough time to further evaluate the predictive power of the model.
This week, the data set under scrutiny contains housing values in the suburbs of Boston in the late 1970s (more information about the data set can be found here). It is part of the MASS library in R. Let’s have a look at its variables:
## 'data.frame': 506 obs. of 14 variables:
## $ crim : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...
## $ zn : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
## $ indus : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
## $ chas : int 0 0 0 0 0 0 0 0 0 0 ...
## $ nox : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
## $ rm : num 6.58 6.42 7.18 7 7.15 ...
## $ age : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
## $ dis : num 4.09 4.97 4.97 6.06 6.06 ...
## $ rad : int 1 2 2 3 3 3 5 5 5 5 ...
## $ tax : num 296 242 242 222 222 222 311 311 311 311 ...
## $ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
## $ black : num 397 397 393 395 397 ...
## $ lstat : num 4.98 9.14 4.03 2.94 5.33 ...
## $ medv : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
The data set consists of 506 neighbourhoods in Boston, and for each there are 14 variables, such as the per capita crime rate and nitric oxides concentration. Let's have a closer look at the variables and visualize them.
## crim zn indus chas
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000
## 1st Qu.: 0.08205 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000
## nox rm age dis
## Min. :0.3850 Min. :3.561 Min. : 2.90 Min. : 1.130
## 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02 1st Qu.: 2.100
## Median :0.5380 Median :6.208 Median : 77.50 Median : 3.207
## Mean :0.5547 Mean :6.285 Mean : 68.57 Mean : 3.795
## 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08 3rd Qu.: 5.188
## Max. :0.8710 Max. :8.780 Max. :100.00 Max. :12.127
## rad tax ptratio black
## Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32
## 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38
## Median : 5.000 Median :330.0 Median :19.05 Median :391.44
## Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67
## 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23
## Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90
## lstat medv
## Min. : 1.73 Min. : 5.00
## 1st Qu.: 6.95 1st Qu.:17.02
## Median :11.36 Median :21.20
## Mean :12.65 Mean :22.53
## 3rd Qu.:16.95 3rd Qu.:25.00
## Max. :37.97 Max. :50.00
All of the variables are numerical. The chas variable is the only binary one, while the others have ranges of varying magnitudes. Some of the variable pairs are very strongly correlated, such as (rad, tax) and (nox, dis), while many pairs are relatively independent of each other.
In order to be able to do proper linear discriminant analysis (LDA) later, all the variables need to be scaled. This is done for each variable by subtracting the mean and dividing by the standard deviation. Here is how the variables look after scaling:
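A minimal sketch of the standardization, assuming the data is loaded from the MASS package and the scaled copy is called boston_scaled:

library(MASS)

# Standardize each variable: subtract its mean and divide by its standard deviation
boston_scaled <- as.data.frame(scale(Boston))
summary(boston_scaled)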
## crim zn indus chas
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563 Min. :-0.2723
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668 1st Qu.:-0.2723
## Median :-0.390280 Median :-0.48724 Median :-0.2109 Median :-0.2723
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150 3rd Qu.:-0.2723
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202 Max. : 3.6648
## nox rm age dis
## Min. :-1.4644 Min. :-3.8764 Min. :-2.3331 Min. :-1.2658
## 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366 1st Qu.:-0.8049
## Median :-0.1441 Median :-0.1084 Median : 0.3171 Median :-0.2790
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059 3rd Qu.: 0.6617
## Max. : 2.7296 Max. : 3.5515 Max. : 1.1164 Max. : 3.9566
## rad tax ptratio black
## Min. :-0.9819 Min. :-1.3127 Min. :-2.7047 Min. :-3.9033
## 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876 1st Qu.: 0.2049
## Median :-0.5225 Median :-0.4642 Median : 0.2746 Median : 0.3808
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058 3rd Qu.: 0.4332
## Max. : 1.6596 Max. : 1.7964 Max. : 1.6372 Max. : 0.4406
## lstat medv
## Min. :-1.5296 Min. :-1.9063
## 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 3.5453 Max. : 2.9865
Now all the variables are of comparable magnitude. Next let’s take the crim variable, which is the per capita crime rate, and turn it into a categorical variable crime using the quantiles as break points. The four categories of approximately equal size are named low, med_low, med_high and high:
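A sketch of the categorization (the quantile break points and the object names are assumptions, consistent with the counts below):

# Use the quantiles of the scaled crime rate as break points
bins <- quantile(boston_scaled$crim)

# Create the categorical crime variable and replace the original crim with it
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE,
             labels = c("low", "med_low", "med_high", "high"))
boston_scaled$crim <- NULL
boston_scaled$crime <- crime

table(boston_scaled$crime)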
##
## low med_low med_high high
## 127 126 126 127
Before we attempt to classify the data with LDA, it is divided into training and test sets with an 80/20 split:
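A sketch of a random 80/20 split (the use of sample and the object names train and test are assumptions; test is also referred to in the prediction code further below):

# Randomly pick 80% of the rows for the training set, the rest form the test set
n <- nrow(boston_scaled)
ind <- sample(n, size = floor(n * 0.8))
train <- boston_scaled[ind, ]
test <- boston_scaled[-ind, ]

cat("Train set size:", nrow(train), "\n")
cat("Test set size:", nrow(test), "\n")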
## Train set size: 404
## Test set size: 102
With the scaled training set, crime is used as a target variable for the LDA and all the other variables are predictor variables.
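A sketch of the fit with the MASS package, using the object name lda.fit that the prediction code below refers to:

library(MASS)

# Fit LDA on the training set: crime as the target, all other variables as predictors
lda.fit <- lda(crime ~ ., data = train)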
Let’s see how well our LDA model fares when evaluated on the test set:
# Separate the correct classes from the predictors in the test set
test_y <- test$crime
test_x <- dplyr::select(test, -crime)

# Predict the crime category for each test observation and cross-tabulate
# the correct classes against the predictions
lda.pred <- predict(lda.fit, newdata = test_x)
table(correct = test_y, predicted = lda.pred$class)
## predicted
## correct low med_low med_high high
## low 23 10 1 0
## med_low 3 14 4 0
## med_high 0 11 12 1
## high 0 0 1 22
The table shows that the neighbourhoods with the highest crime rates are identified very reliably, while the neighbourhoods with lower crime rates cannot be classified as successfully. Most notably, the LDA cannot distinguish medium low and medium high crime rate areas convincingly based on these explanatory features, and one neighbourhood with low crime rate was even classified as a medium high crime rate area.
Let's now for a moment forget about the LDA that was done and try to also cluster the original data using the k-means algorithm. The data is first scaled similarly as before. Let's have a look at a summary of the Euclidean distances within the scaled data set:
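A sketch of this step (boston_scaled2 and dist_eu are assumed object names):

# Standardize the Boston data again and compute all pairwise Euclidean distances
boston_scaled2 <- as.data.frame(scale(MASS::Boston))
dist_eu <- dist(boston_scaled2, method = "euclidean")
summary(dist_eu)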
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.1343 3.4625 4.8241 4.9111 6.1863 14.3970
The minimum and maximum values indicate that some data points are indeed very close to each other compared to the mean distance, but there are also considerable distances between other points. This suggests that the data set is not homogeneous and the data points form clusters of some sort. The optimal number of clusters can be investigated by running the k-means algorithm for a range of k and calculating the within-cluster sum of squares (WCSS) for each k. The plot of WCSS as a function of k is shown below.
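A sketch of the elbow search (the maximum k of 10 and the random seed are assumptions):

# k-means uses random initial centroids, so fix the seed for reproducibility
set.seed(123)

# Total within-cluster sum of squares for k = 1..10
k_max <- 10
twcss <- sapply(1:k_max, function(k) kmeans(boston_scaled2, centers = k)$tot.withinss)

# Plot WCSS as a function of the number of clusters
plot(1:k_max, twcss, type = "b", xlab = "number of clusters k", ylab = "total WCSS")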
By design, the WCSS value decreases monotonically as k is increased. The optimal value for k can be found by looking for the so-called elbow point, where the largest drop in WCSS between two consecutive values occurs. This suggests that k = 2 is perhaps the optimal value, but k = 3 should work fine as well; after that the slope becomes less steep at each step. The clusters can be visualized by plotting the variables against each other and colouring the points by cluster. Here k = 3 was used:
The three categories certainly form their own clusters, though it is quite difficult to judge how well the clustering works, because these plots are 2-dimensional while the clusters live in a 14-dimensional space. Let us then combine LDA and the k-means clustering to see if LDA is capable of finding the clusters. All 14 variables are used as predictors, while the three cluster categories are used as the target.
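A sketch of this step (object names and the seed are assumptions); the biplot discussed next would then be drawn from the resulting lda.clusters model:

set.seed(123)

# Run k-means with three clusters and use the cluster labels as the LDA target
km <- kmeans(boston_scaled2, centers = 3)
boston_clustered <- data.frame(boston_scaled2, cluster = factor(km$cluster))
lda.clusters <- MASS::lda(cluster ~ ., data = boston_clustered)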
From this plot we can see that, as was observed earlier, there is clearly one cluster of data points separate from the majority. The rad and tax variables are the most influential separators of the clusters. It was seen in the earlier LDA biplot as well that rad was important in separating the high crime rate neighbourhoods from the others. We can verify whether cluster 1 and the high crime rate areas are indeed the same by comparing two 3D plots of the LDA, one labelled with the crime categories and another with the k-means clusters. (You can rotate the plots by dragging them!)
It appears that cluster 1 practically corresponds to the high crime rate neighbourhoods. The k-means clustering algorithm also did a reasonable job of identifying the low crime rate neighbourhoods (cluster 2) and the medium-low crime rate neighbourhoods (cluster 3). Of course the k-means algorithm operated with one category fewer than the LDA, so they cannot have a one-to-one correspondence.